Bayesian optimization is a popular framework for the optimization of black-box functions. Multi-fidelity methods can accelerate Bayesian optimization by exploiting low-fidelity representations of an expensive objective function. Popular multi-fidelity Bayesian strategies rely on sampling policies that account only for the immediate reward of evaluating the objective function at a given input, precluding the larger information gains that may be obtainable by looking further ahead. This paper proposes a non-myopic multi-fidelity Bayesian framework that captures the long-term reward of future optimization steps. Our computational strategy uses a two-step lookahead multi-fidelity acquisition function that maximizes a cumulative reward measuring the improvement in the solution over the next two steps. We demonstrate that the proposed algorithm outperforms a standard multi-fidelity Bayesian framework on popular benchmark optimization problems.
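As a rough sketch (not the authors' implementation), the two-step lookahead idea can be approximated by Monte Carlo rollouts over a Gaussian-process surrogate: the immediate expected improvement of a candidate point is added to the best expected improvement achievable one step later, averaged over fantasized outcomes. The multi-fidelity dimension is omitted here, and names such as `two_step_value` and `n_fantasies` are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(gp, X, best):
    """One-step expected improvement for minimization."""
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def two_step_value(gp, x, X_train, y_train, X_cand, n_fantasies=16, rng=None):
    """Immediate reward of evaluating x now plus the best expected
    improvement obtainable one step later, averaged over fantasy outcomes."""
    rng = rng or np.random.default_rng(0)
    best = y_train.min()
    immediate = expected_improvement(gp, x.reshape(1, -1), best)[0]
    mu, sigma = gp.predict(x.reshape(1, -1), return_std=True)
    future = 0.0
    for _ in range(n_fantasies):
        y_fantasy = rng.normal(mu[0], sigma[0])      # imagined observation at x
        gp_f = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6)
        gp_f.fit(np.vstack([X_train, x]), np.append(y_train, y_fantasy))
        best_f = min(best, y_fantasy)
        future += expected_improvement(gp_f, X_cand, best_f).max()
    return immediate + future / n_fantasies
```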
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
One of the major challenges in Deep Reinforcement Learning for control is the need for extensive training to learn the policy. Motivated by this, we present the design of the Control-Tutored Deep Q-Networks (CT-DQN) algorithm, a Deep Reinforcement Learning algorithm that leverages a control tutor, i.e., an exogenous control law, to reduce learning time. The tutor can be designed using an approximate model of the system, without any assumption about knowledge of the system's dynamics. There is no expectation that it will be able to achieve the control objective if used stand-alone. During learning, the tutor occasionally suggests an action, thus partially guiding exploration. We validate our approach on three scenarios from OpenAI Gym: the inverted pendulum, lunar lander, and car racing. We demonstrate that CT-DQN achieves data efficiency that is better than or equivalent to that of classic function-approximation solutions.
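A minimal sketch of the action-selection idea described above, assuming the tutor occasionally overrides the usual epsilon-greedy rule of a DQN; the names `tutor_policy` and `beta` are illustrative, not taken from the paper.

```python
import random
import torch

def select_action(q_net, tutor_policy, state, epsilon=0.1, beta=0.2, n_actions=4):
    """Pick an action for one environment step.

    q_net        -- torch module mapping a state tensor to Q-values
    tutor_policy -- approximate control law: state -> suggested action index
    epsilon      -- exploration rate of the underlying DQN
    beta         -- probability of deferring to the tutor's suggestion
    """
    if random.random() < beta:                 # occasionally follow the tutor
        return tutor_policy(state)
    if random.random() < epsilon:              # otherwise the usual exploration
        return random.randrange(n_actions)
    with torch.no_grad():                      # greedy action from the Q-network
        q_values = q_net(torch.as_tensor(state, dtype=torch.float32).unsqueeze(0))
        return int(q_values.argmax(dim=1).item())
```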
To make machine learning (ML) sustainable and apt to run on the diverse devices where the relevant data resides, it is essential to compress ML models as needed, while still meeting the required learning quality and time performance. However, how much and when an ML model should be compressed, and where its training should be executed, are hard decisions to make, as they depend on the model itself, the resources of the available nodes, and the data such nodes own. Existing studies focus on each of those aspects individually; however, they do not account for how such decisions can be made jointly and adapted to one another. In this work, we model the network system focusing on the training of DNNs, formalize the above multi-dimensional problem, and, given its NP-hardness, formulate an approximate dynamic programming problem that we solve through the PACT algorithmic framework. Importantly, PACT leverages a time-expanded graph representing the learning process, and a data-driven and theoretical approach for the prediction of the loss evolution to be expected as a consequence of training decisions. We prove that PACT's solutions can get as close to the optimum as desired, at the cost of an increased time complexity, and that, in any case, such complexity is polynomial. Numerical results also show that, even under the most disadvantageous settings, PACT outperforms state-of-the-art alternatives and closely matches the optimal energy cost.
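As an illustration of the kind of computation a time-expanded graph enables (this is not the PACT algorithm itself), once compression and placement decisions are encoded as edges of a DAG over (time step, system state) nodes with per-edge costs, a minimum-cost plan follows from a simple dynamic program:

```python
from collections import defaultdict

def min_cost_plan(edges, source, target):
    """edges: dict mapping a node to a list of (next_node, cost) pairs.
    Nodes are assumed to be (time_step, state) tuples, so sorting by the
    first component gives a topological order of the time-expanded DAG."""
    best = defaultdict(lambda: float("inf"))
    parent = {}
    best[source] = 0.0
    for node in sorted(edges, key=lambda n: n[0]):        # process by time step
        for nxt, cost in edges[node]:
            if best[node] + cost < best[nxt]:
                best[nxt] = best[node] + cost
                parent[nxt] = node
    plan, cur = [], target                                # reconstruct the decision sequence
    while cur != source:
        plan.append(cur)
        cur = parent[cur]                                 # assumes target is reachable
    return best[target], [source] + plan[::-1]
```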
Prescriptive Process Monitoring systems recommend, during the execution of a business process, interventions that, if followed, prevent a negative outcome of the process. Such interventions have to be reliable, that is, they have to guarantee the achievement of the desired outcome or performance, and they have to be flexible, that is, they have to avoid overturning the normal process execution or forcing the execution of a given activity. Most of the existing Prescriptive Process Monitoring solutions, however, while performing well in terms of recommendation reliability, provide the users with very specific (sequences of) activities that have to be executed, without caring about the feasibility of these recommendations. To address this issue, we propose a new Outcome-Oriented Prescriptive Process Monitoring system that recommends temporal relations between activities that have to be guaranteed during the process execution in order to achieve a desired outcome. This softens the mandatory execution of an activity at a given point in time, thus leaving more freedom to the user in deciding the interventions to put in place. Our approach defines these temporal relations with Linear Temporal Logic over finite traces (LTLf) patterns, which are used as features to describe the historical process data recorded in an event log by the information systems supporting the execution of the process. The encoded log is used to train a Machine Learning classifier to learn a mapping between the temporal patterns and the outcome of a process execution. The classifier is then queried at runtime to return, as recommendations, the most salient temporal patterns to be satisfied to maximize the likelihood of a certain outcome for an input ongoing process execution. The proposed system is assessed using a pool of 22 real-life event logs that have already been used as a benchmark in the Process Mining community.
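A toy sketch of the encoding step, assuming a single hand-written "eventually followed by" relation in place of the full LTLf pattern catalogue; activity names and the classifier choice are illustrative.

```python
from sklearn.ensemble import RandomForestClassifier

def eventually_followed_by(trace, a, b):
    """True if activity `a` occurs and `b` occurs at some later position."""
    for i, act in enumerate(trace):
        if act == a and b in trace[i + 1:]:
            return True
    return False

def encode(traces, activity_pairs):
    """One boolean feature per candidate temporal relation."""
    return [[int(eventually_followed_by(t, a, b)) for a, b in activity_pairs]
            for t in traces]

# toy event log: each trace is a sequence of activities, labels are outcomes
traces = [["register", "check", "approve"], ["register", "reject"]]
labels = [1, 0]
pairs = [("register", "approve"), ("check", "approve")]

clf = RandomForestClassifier().fit(encode(traces, pairs), labels)
# at runtime, the most salient satisfied/violated patterns can be recommended
print(clf.feature_importances_)
```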
A social robot is an autonomous robot that interacts with people by engaging in the social-emotional behaviors, skills, capabilities, and norms associated with its collaborative role. To achieve these goals, we believe that modeling the interaction with the user and adapting the robot's behavior to the user is essential to its social role. This paper presents our first attempt to integrate user-modeling features into a social and affective robot. We propose a cloud-based architecture for modeling the user-robot interaction, so that the approach can be reused with different types of social robots.
The representation of data is of paramount importance for machine learning methods. Kernel methods are used to enrich the feature representation, allowing better generalization. Quantum kernels efficiently implement complex transformations that encode classical data in the Hilbert space of a quantum system, potentially even leading to exponential speedups. However, prior knowledge of the data is needed to choose an appropriate parametric quantum circuit that can be used as a quantum embedding. We propose an algorithm that automatically selects the best quantum embedding through a combinatorial optimization procedure that modifies the structure of the circuit, changing the generators of the gates, their angles (which depend on the data points), and the qubits on which the various gates act. Since combinatorial optimization is computationally expensive, we introduce a criterion based on the exponential concentration of the kernel matrix coefficients around their mean to immediately discard an arbitrarily large portion of candidate solutions that are expected to perform poorly. In contrast to gradient-based optimization (e.g., trainable quantum kernels), our approach is not affected by barren plateaus. We use both artificial and real-world datasets to demonstrate the improvement of our approach with respect to randomly generated PQCs. We also compare the effect of different optimization algorithms, including greedy local search, simulated annealing, and genetic algorithms, showing that the choice of algorithm largely affects the result.
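A minimal sketch of the screening criterion, assuming a candidate embedding is summarized by its Gram matrix on a data sample; the threshold value is illustrative.

```python
import numpy as np

def passes_concentration_check(K, min_std=1e-2):
    """Reject a candidate kernel matrix whose off-diagonal entries are
    (nearly) constant around their mean, a symptom of exponential concentration."""
    off_diag = K[~np.eye(K.shape[0], dtype=bool)]   # all off-diagonal entries
    return off_diag.std() >= min_std
```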
This technical report describes a modular and scalable architecture for computing vision-based statistics in the RoboCup SPL (MARIO), presented during the SPL Open Research Challenge at RoboCup 2022, held in Bangkok (Thailand). MARIO is an open-source, ready-to-use software application whose final goal is to contribute to the development of the RoboCup SPL community. MARIO comes with a GUI that integrates multiple machine learning and computer vision based features, including automatic camera calibration, background subtraction, homography computation, player + ball tracking and localization, NAO robot pose estimation, and fall detection. MARIO ranked No. 1 in the Open Research Challenge.
QuASK is a quantum machine learning software written in Python that supports researchers in designing, experimenting with, and assessing the performance of different quantum and classical kernels. The software is agnostic to the underlying framework and can be integrated with all major quantum software packages (e.g., IBM Qiskit, Xanadu's PennyLane, Amazon Braket). QuASK guides the user through a simple pipeline of preprocessing the data and defining and computing quantum and classical kernels, either customized or pre-defined. From this evaluation, the package provides an assessment of the potential quantum advantage and predictive bounds on the generalization error. Moreover, it allows the generation of parametric quantum kernels that can be trained using gradient-based optimization, grid search, or genetic algorithms. Projected quantum kernels are also computed, an effective solution to mitigate the curse of dimensionality induced by the exponentially scaling dimension of large Hilbert spaces. QuASK can also generate the observable values of a quantum model and use them to study the predictive power of quantum and classical kernels.
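Independently of the QuASK API (which is not reproduced here), a precomputed quantum or classical Gram matrix is typically assessed with a downstream classical learner, e.g. along these lines:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def evaluate_kernel(K, y, test_size=0.3, seed=0):
    """K: full (n, n) kernel Gram matrix; y: labels. Returns test accuracy."""
    idx_train, idx_test = train_test_split(np.arange(len(y)), test_size=test_size,
                                           random_state=seed, stratify=y)
    clf = SVC(kernel="precomputed")
    clf.fit(K[np.ix_(idx_train, idx_train)], y[idx_train])      # train-vs-train block
    return clf.score(K[np.ix_(idx_test, idx_train)], y[idx_test])  # test-vs-train block
```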
Dynamical systems minimizing an energy are ubiquitous in geometry and physics. We propose a gradient flow framework for GNNs where the equations follow the direction of steepest descent of a learnable energy. This approach allows the evolution of the GNN to be interpreted from a multi-particle perspective as learning attractive and repulsive forces in feature space via the positive and negative eigenvalues of a symmetric "channel-mixing" matrix. We perform spectral analysis of the solutions and conclude that gradient flow graph convolutional models can induce a dynamics dominated by the graph high frequencies, which is desirable for heterophilic datasets. We also describe structural constraints on common GNN architectures allowing them to be interpreted as gradient flows. We perform thorough ablation studies corroborating our theoretical analysis and show competitive performance of simple and lightweight models on real-world homophilic and heterophilic datasets.
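A minimal sketch of a gradient-flow-style layer, assuming a symmetric channel-mixing matrix and a precomputed normalized adjacency; this illustrates the idea, not the authors' exact parameterization.

```python
import torch
import torch.nn as nn

class GradientFlowLayer(nn.Module):
    def __init__(self, channels, tau=0.5):
        super().__init__()
        self.tau = tau
        self.weight = nn.Parameter(torch.empty(channels, channels))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, x, adj):
        # symmetrizing the channel-mixing matrix makes the update a gradient flow;
        # its positive/negative eigenvalues act as attractive/repulsive forces
        w_sym = 0.5 * (self.weight + self.weight.T)
        return x + self.tau * (adj @ x @ w_sym - x @ w_sym)
```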